Keynotes
We are pleased to announce the following keynote speakers for CVMP 2025:
Angela Dai, Technical University of Munich
Title TBD
Abstract and talk details will be announced soon.
Angela Dai is an Associate Professor at the Technical University of Munich, where she leads the 3D AI Lab. Her research focuses on modeling and semantically understanding the real-world 3D scenes around us. She received her PhD in computer science from Stanford in 2018, advised by Pat Hanrahan, and her BSE in computer science from Princeton in 2013. Her research has been recognized with an ECVA Young Researcher Award, an ERC Starting Grant, the Eurographics Young Researcher Award, the German Pattern Recognition Award, a Google Research Scholar Award, and an ACM SIGGRAPH Outstanding Doctoral Dissertation Honorable Mention.

Yi-Zhe Song, University of Surrey
Sketch-based Interfaces for Democratising AI-Powered Creative Tools
This keynote examines how sketch-based interfaces democratise AI-powered creative tools, tracing a progression from 2D recognition to immersive 3D generation. Drawing on our decade-long research journey, I demonstrate why sketching is an essential human-AI interface through its unique balance of simplicity and expressive power. Beginning with Sketch-a-Net, the first system to surpass human performance in sketch recognition, I establish the foundational principles underlying sketch-based AI systems. These insights drove practical applications in fine-grained sketch-based image retrieval, showing that simple drawings can unlock complex visual searches more intuitively than text. The talk then explores how 2D sketches enable 3D capabilities: tablet sketches can effectively retrieve and generate complex 3D models, bridging dimensional barriers without requiring specialised expertise. Moving to the frontier of VR sketching, I present our advances in 3D sketch representation learning, which reveal how spatial strokes encode geometric information differently from 2D drawings. The talk concludes with our latest frameworks for high-resolution generation on consumer hardware, which will, in time, serve our vision for accessible creative AI, in which sketch-based interfaces make advanced capabilities available to all users regardless of technical expertise.
Yi-Zhe Song is Professor of Computer Vision and Machine Learning at the Centre for Vision, Speech and Signal Processing (CVSSP) and co-director of the Surrey Institute for People-Centred AI. As founder and leader of the SketchX Lab (est. 2012), he has driven groundbreaking research in sketch understanding, including the first deep neural network to surpass human performance in sketch recognition (BMVC 2015 Best Science Paper Award). His work spans fine-grained sketch-based image retrieval, domain generalisation, and bridging sketch with mainstream computer vision, with recent contributions in sketch-based object detection earning a Best Paper nomination at CVPR 2023. He serves as Associate Editor for IEEE TPAMI and IJCV and has been an Area Chair for ECCV, CVPR, and ICCV. Prof. Song established and directs Surrey’s MSc in AI programme, following a similar initiative he created at Queen Mary University of London.
